-
Altenbach, Holm; Eremeyev, Victor A. (Eds.)
We propose an analytical approach to solving nonlocal generalizations of the Euler–Bernoulli beam. Specifically, we consider a version of the governing equation recently derived under the theory of peridynamics. We focus on the clamped–clamped case, employing the natural eigenfunctions of the fourth derivative subject to these boundary conditions. Static solutions under different loading conditions are obtained as series in these eigenfunctions. To demonstrate the utility of our proposed approach, we contrast the series solution in terms of fourth-order eigenfunctions to the previously obtained Fourier sine series solution. Our findings reveal that the series in fourth-order eigenfunctions achieves a given error tolerance (with respect to a reference solution) with ten times fewer terms than the sine series. The high accuracy of the fourth-order eigenfunction expansion stems from the rapid decay of its expansion coefficients, which fall off one order faster than those of the Fourier series in our examples.
Free, publicly accessible full text available June 18, 2026.
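For readers unfamiliar with the fourth-order eigenfunctions mentioned above: for a clamped–clamped beam they are the classical mode shapes of d⁴/dx⁴, with wavenumbers set by cos(βL)·cosh(βL) = 1. The Python sketch below (NumPy/SciPy) is only a minimal illustration of building this basis and projecting a load onto it; the beam length, the uniform load, and the number of modes are assumptions chosen for demonstration, and the sketch does not reproduce the paper's peridynamic solution.

```python
# Minimal illustration (not the paper's code): build the clamped-clamped
# eigenfunctions of d^4/dx^4 and project a uniform load onto them, alongside
# the Fourier sine basis, to compare how the expansion coefficients decay.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

L = 1.0  # beam length (assumed for illustration)

def char_eq(beta):
    # Clamped-clamped frequency equation: cos(beta*L) * cosh(beta*L) = 1
    return np.cos(beta * L) * np.cosh(beta * L) - 1.0

# Nonzero roots sit near (n + 1/2) * pi / L; bracket and solve each one.
betas = [brentq(char_eq, (n + 0.4) * np.pi / L, (n + 0.6) * np.pi / L)
         for n in range(1, 7)]

def phi(x, beta):
    # Standard clamped-clamped mode shape (zero value and slope at both ends).
    sigma = ((np.cosh(beta * L) - np.cos(beta * L))
             / (np.sinh(beta * L) - np.sin(beta * L)))
    return (np.cosh(beta * x) - np.cos(beta * x)
            - sigma * (np.sinh(beta * x) - np.sin(beta * x)))

def load(x):
    return 1.0  # uniform load, chosen only as an example

for n, beta in enumerate(betas, start=1):
    num, _ = quad(lambda x: load(x) * phi(x, beta), 0.0, L)
    den, _ = quad(lambda x: phi(x, beta) ** 2, 0.0, L)
    c_fourth = num / den                           # 4th-order eigenfunction coefficient
    s, _ = quad(lambda x: load(x) * np.sin(n * np.pi * x / L), 0.0, L)
    c_sine = 2.0 * s / L                           # Fourier sine coefficient
    print(f"n={n}: |c_fourth| = {abs(c_fourth):.3e}, |c_sine| = {abs(c_sine):.3e}")
```

Running this prints the first few expansion coefficients of the load in both bases, the same kind of decay comparison the abstract reports for the solution coefficients.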
-
Abstract: Decoder-only Transformer models such as Generative Pre-trained Transformers (GPT) have demonstrated exceptional performance in text generation by autoregressively predicting the next token. However, the efficiency of running GPT on current hardware systems is bounded by a low compute-to-memory ratio and high memory-access volume. In this work, we propose a processing-in-memory (PIM) GPT accelerator, PIM-GPT, which achieves end-to-end acceleration of GPT inference with high performance and high energy efficiency. PIM-GPT leverages DRAM-based PIM designs to execute multiply-accumulate (MAC) operations directly in the DRAM chips, eliminating the need to move matrix data off-chip. Non-linear functions and data communication are supported by an application-specific integrated circuit (ASIC). At the software level, mapping schemes are designed to maximize data locality and computation parallelism. Overall, PIM-GPT achieves 41–137× and 631–1074× speedup, and 123–383× and 320–602× energy efficiency, over GPU and CPU baselines, respectively, on 8 GPT models with up to 1.4 billion parameters.
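To make the mapping idea concrete, here is a toy Python sketch of row-parallel partitioning of a matrix-vector multiply: weight rows stay resident with their bank while an external unit handles the non-linear step. The bank count, hidden size, GELU choice, and partitioning scheme are illustrative assumptions, not details taken from the PIM-GPT design.

```python
# Toy sketch (not PIM-GPT's actual mapping): partition a matrix-vector
# multiply across DRAM banks so each bank performs MACs on its own rows,
# then apply a non-linear function outside the banks.
import numpy as np

NUM_BANKS = 16   # illustrative bank count (assumption)
HIDDEN = 1024    # illustrative hidden size (assumption)

rng = np.random.default_rng(0)
W = rng.standard_normal((HIDDEN, HIDDEN)).astype(np.float32)  # weights resident in DRAM
x = rng.standard_normal(HIDDEN).astype(np.float32)            # activation vector broadcast to banks

# Row-parallel mapping: each bank owns a contiguous slice of W's rows and
# produces the corresponding slice of the output, so weights never leave
# their bank (data locality) and banks work in parallel.
row_slices = np.array_split(np.arange(HIDDEN), NUM_BANKS)
partial = [W[rows] @ x for rows in row_slices]   # per-bank MAC work
y = np.concatenate(partial)

# Non-linear function (GELU, tanh approximation) handled by a separate unit
# in this sketch, standing in for the ASIC-side processing.
def gelu(v):
    return 0.5 * v * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (v + 0.044715 * v**3)))

out = gelu(y)
# Sanity check against the unpartitioned GEMV (loose tolerance for float32).
assert np.allclose(y, W @ x, atol=1e-3)
```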
-
Planning in a text-based environment continues to be a significant challenge for AI systems. Recent approaches have utilized language models to predict planning domain definitions (e.g., PDDL) but have only been evaluated in closed-domain simulated environments. To address this, we present Proc2PDDL, the first dataset containing open-domain procedural texts paired with expert-annotated PDDL representations. Using this dataset, we evaluate the task of predicting domain actions (parameters, preconditions, and effects). We experiment with various large language models (LLMs) and prompting mechanisms, including a novel instruction inspired by the zone of proximal development (ZPD), which reconstructs the task as incremental basic skills. Our results demonstrate that Proc2PDDL is highly challenging for end-to-end LLMs, with GPT-3.5 succeeding on close to 0% of cases and GPT-4o on 38%. With ZPD instructions, GPT-4o's success rate increases to 45%, outperforming regular chain-of-thought prompting's 34%. Our analysis systematically examines both syntactic and semantic errors, providing insights into the strengths and weaknesses of language models in generating domain-specific programs.
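As a purely hypothetical illustration of what "domain actions (parameters, preconditions, and effects)" look like as a prediction target, the Python snippet below renders one made-up action in PDDL syntax; the action name and predicates are invented and are not drawn from the Proc2PDDL dataset.

```python
# Hypothetical illustration (not an entry from Proc2PDDL): each PDDL action
# has parameters, a precondition formula, and an effect formula that a model
# must generate from procedural text such as "crack the egg into the bowl".

def pddl_action(name, params, precondition, effect):
    """Render one PDDL :action block from plain-Python pieces."""
    param_str = " ".join(f"?{v} - {t}" for v, t in params)
    return (f"(:action {name}\n"
            f"  :parameters ({param_str})\n"
            f"  :precondition {precondition}\n"
            f"  :effect {effect})")

# Example action a model might be asked to produce; names are made up.
print(pddl_action(
    name="crack-egg",
    params=[("e", "egg"), ("b", "container")],
    precondition="(and (whole ?e) (empty-handed))",
    effect="(and (not (whole ?e)) (in ?e ?b))",
))
```

A model evaluated on this task must produce exactly these three components for each action implied by the procedural text.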
